
[clusteragent/autoscaling] Defer autoscaling stack startup until first DPA or autoscaling workload#50305

Open
davidor wants to merge 9 commits into main from davidor/contp-1632-autoscaling-lazy-start

Conversation


@davidor davidor commented May 4, 2026

What does this PR do?

The goal of this PR is to allow enabling workload autoscaling without extra cost when it's not in use.

Right now, when autoscaling is enabled, the kubeapiserver workloadmeta collector starts a pod reflector among other things. In large clusters, this can use a lot of memory. This happens even if no DPA is deployed.

We want to avoid this memory usage when no DPAs are deployed.

This paves the way for enabling workload autoscaling by default. Users who want it will be able to create DPAs directly, without having to enable the option in the Cluster Agent.

This PR does not flip the autoscaling.workload.enabled default to true. That will be done in a separate PR so it can be reverted independently if needed.
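The deferred-startup idea described above can be sketched with a minimal, stdlib-only Go example. The `Gate` type and method names here are hypothetical illustrations, not the actual `autoscalinggate` package API: expensive components register start callbacks that only run once the gate is tripped by the first DPA.

```go
package main

import (
	"fmt"
	"sync"
)

// Gate defers registered components until the first trigger.
// Hypothetical sketch; not the real autoscalinggate API.
type Gate struct {
	mu     sync.Mutex
	open   bool
	starts []func()
}

// OnOpen registers a start callback. If the gate is already open,
// the callback runs immediately (late registration).
func (g *Gate) OnOpen(start func()) {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.open {
		start()
		return
	}
	g.starts = append(g.starts, start)
}

// Open trips the gate exactly once; subsequent calls are no-ops.
func (g *Gate) Open() {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.open {
		return
	}
	g.open = true
	for _, start := range g.starts {
		start()
	}
	g.starts = nil
}

func main() {
	var g Gate
	started := false
	g.OnOpen(func() { started = true; fmt.Println("pod reflector started") })
	fmt.Println("before first DPA:", started) // gate closed: nothing running yet
	g.Open()                                  // first DPA observed
	g.Open()                                  // idempotent
	fmt.Println("after first DPA:", started)
}
```

The gate-not-open path is what keeps the pod reflector (and its memory cost) out of clusters with no DPAs.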

Describe how you validated your changes

Unit tests plus tests on a local kind cluster.

For the kind tests, I used kwok to simulate a large number of pods so the memory impact of the pod reflector would be measurable.

First, I deployed with autoscaling disabled to get a memory baseline. Then I deployed with autoscaling enabled but no DPAs, and checked that memory usage stayed close to the baseline. Finally, I created a DPA and checked that memory usage went up as expected.

@davidor davidor added this to the 7.80.0 milestone May 4, 2026
@davidor davidor added the qa/done QA done before merge and regressions are covered by tests label May 4, 2026
@dd-octo-sts dd-octo-sts Bot added the internal Identify a non-fork PR label May 4, 2026
@davidor davidor added the changelog/no-changelog No changelog entry needed label May 4, 2026
@dd-octo-sts dd-octo-sts Bot added the team/container-platform The Container Platform Team label May 4, 2026
@github-actions github-actions Bot added the long review PR is complex, plan time to review it label May 4, 2026

dd-octo-sts Bot commented May 4, 2026

Go Package Import Differences

Baseline: 3227852
Comparison: 406db0c

| binary | os | arch | change |
|---|---|---|---|
| cluster-agent | linux | amd64 | +1, -0: `+github.com/DataDog/datadog-agent/pkg/clusteragent/autoscaling/autoscalinggate` |
| cluster-agent | linux | arm64 | +1, -0: `+github.com/DataDog/datadog-agent/pkg/clusteragent/autoscaling/autoscalinggate` |


datadog-official Bot commented May 4, 2026

🎯 Code Coverage (details)
Patch Coverage: 40.60%
Overall Coverage: 50.30% (-0.00%)

This comment will be updated automatically if new data arrives.
🔗 Commit SHA: 406db0c | Docs | Datadog PR Page | Give us feedback!


dd-octo-sts Bot commented May 4, 2026

Files inventory check summary

File checks results against ancestor 3227852e:

Results for datadog-agent_7.80.0~devel.git.838.406db0c.pipeline.113349055-1_amd64.deb:

No change detected


dd-octo-sts Bot commented May 4, 2026

Static quality checks

✅ Please find below the results from static quality gates
Comparison made with ancestor 3227852
📊 Static Quality Gates Dashboard
🔗 SQG Job

Successful checks

Info

| Quality gate | Change | Size (prev → curr → max) |
|---|---|---|
| docker_cluster_agent_amd64 | +12.13 KiB (0.01% increase) | 206.857 → 206.869 → 207.710 |
31 successful checks with minimal change (< 2 KiB)
| Quality gate | Current size |
|---|---|
| agent_deb_amd64 | 744.376 MiB |
| agent_deb_amd64_fips | 702.400 MiB |
| agent_heroku_amd64 | 310.080 MiB |
| agent_rpm_amd64 | 744.360 MiB |
| agent_rpm_amd64_fips | 702.383 MiB |
| agent_rpm_arm64 | 722.002 MiB |
| agent_rpm_arm64_fips | 683.160 MiB |
| agent_suse_amd64 | 744.360 MiB |
| agent_suse_amd64_fips | 702.383 MiB |
| agent_suse_arm64 | 722.002 MiB |
| agent_suse_arm64_fips | 683.160 MiB |
| docker_agent_amd64 | 804.522 MiB |
| docker_agent_arm64 | 806.977 MiB |
| docker_agent_jmx_amd64 | 995.441 MiB |
| docker_agent_jmx_arm64 | 986.676 MiB |
| docker_cluster_agent_arm64 | 220.826 MiB |
| docker_cws_instrumentation_amd64 | 7.154 MiB |
| docker_cws_instrumentation_arm64 | 6.689 MiB |
| docker_dogstatsd_amd64 | 39.503 MiB |
| docker_dogstatsd_arm64 | 37.690 MiB |
| docker_host_profiler_amd64 | 302.254 MiB |
| docker_host_profiler_arm64 | 313.761 MiB |
| dogstatsd_deb_amd64 | 30.162 MiB |
| dogstatsd_deb_arm64 | 28.284 MiB |
| dogstatsd_rpm_amd64 | 30.162 MiB |
| dogstatsd_suse_amd64 | 30.162 MiB |
| iot_agent_deb_amd64 | 44.457 MiB |
| iot_agent_deb_arm64 | 41.421 MiB |
| iot_agent_deb_armhf | 42.130 MiB |
| iot_agent_rpm_amd64 | 44.457 MiB |
| iot_agent_suse_amd64 | 44.457 MiB |

cit-pr-commenter-54b7da Bot commented May 4, 2026

Regression Detector

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: 6d2787ac-77a6-47f9-b9bd-ea585f61e6ee

Baseline: 3227852
Comparison: 406db0c
Diff

Optimization Goals: ✅ No significant changes detected

Experiments ignored for regressions

Regressions in experiments with settings containing erratic: true are ignored.

| experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|
| docker_containers_cpu | % cpu utilization | -0.52 | [-3.41, +2.36] | 1 | Logs |

Fine details of change detection per experiment

| experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|
| docker_containers_memory | memory utilization | +0.45 | [+0.34, +0.56] | 1 | Logs |
| ddot_logs | memory utilization | +0.29 | [+0.22, +0.36] | 1 | Logs |
| file_tree | memory utilization | +0.22 | [+0.17, +0.26] | 1 | Logs |
| otlp_ingest_logs | memory utilization | +0.14 | [+0.04, +0.23] | 1 | Logs |
| tcp_dd_logs_filter_exclude | ingress throughput | +0.02 | [-0.09, +0.13] | 1 | Logs |
| uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | +0.00 | [-0.05, +0.06] | 1 | Logs |
| uds_dogstatsd_to_api_v3 | ingress throughput | -0.00 | [-0.21, +0.20] | 1 | Logs |
| ddot_metrics_sum_delta | memory utilization | -0.01 | [-0.20, +0.18] | 1 | Logs |
| file_to_blackhole_100ms_latency | egress throughput | -0.02 | [-0.15, +0.12] | 1 | Logs |
| quality_gate_idle_all_features | memory utilization | -0.03 | [-0.07, +0.01] | 1 | Logs, bounds checks dashboard |
| file_to_blackhole_1000ms_latency | egress throughput | -0.04 | [-0.49, +0.41] | 1 | Logs |
| uds_dogstatsd_to_api | ingress throughput | -0.05 | [-0.26, +0.15] | 1 | Logs |
| file_to_blackhole_0ms_latency | egress throughput | -0.06 | [-0.58, +0.47] | 1 | Logs |
| file_to_blackhole_500ms_latency | egress throughput | -0.06 | [-0.46, +0.34] | 1 | Logs |
| quality_gate_idle | memory utilization | -0.07 | [-0.12, -0.02] | 1 | Logs, bounds checks dashboard |
| ddot_metrics | memory utilization | -0.11 | [-0.31, +0.10] | 1 | Logs |
| otlp_ingest_metrics | memory utilization | -0.16 | [-0.32, -0.00] | 1 | Logs |
| ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | -0.22 | [-0.46, +0.01] | 1 | Logs |
| quality_gate_metrics_logs | memory utilization | -0.27 | [-0.52, -0.03] | 1 | Logs, bounds checks dashboard |
| tcp_syslog_to_blackhole | ingress throughput | -0.45 | [-0.66, -0.25] | 1 | Logs |
| docker_containers_cpu | % cpu utilization | -0.52 | [-3.41, +2.36] | 1 | Logs |
| ddot_metrics_sum_cumulative | memory utilization | -0.70 | [-0.85, -0.54] | 1 | Logs |
| quality_gate_logs | % cpu utilization | -1.40 | [-2.40, -0.40] | 1 | Logs, bounds checks dashboard |

Bounds Checks: ✅ Passed

| experiment | bounds_check_name | replicates_passed | observed_value | links |
|---|---|---|---|---|
| docker_containers_cpu | simple_check_run | 10/10 | 703 ≥ 26 | |
| docker_containers_memory | memory_usage | 10/10 | 245.20MiB ≤ 370MiB | |
| docker_containers_memory | simple_check_run | 10/10 | 715 ≥ 26 | |
| file_to_blackhole_0ms_latency | memory_usage | 10/10 | 0.16GiB ≤ 1.20GiB | |
| file_to_blackhole_0ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| file_to_blackhole_1000ms_latency | memory_usage | 10/10 | 0.20GiB ≤ 1.20GiB | |
| file_to_blackhole_1000ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| file_to_blackhole_100ms_latency | memory_usage | 10/10 | 0.17GiB ≤ 1.20GiB | |
| file_to_blackhole_100ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| file_to_blackhole_500ms_latency | memory_usage | 10/10 | 0.18GiB ≤ 1.20GiB | |
| file_to_blackhole_500ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| quality_gate_idle | intake_connections | 10/10 | 3 ≤ 4 | bounds checks dashboard |
| quality_gate_idle | memory_usage | 10/10 | 144.81MiB ≤ 147MiB | bounds checks dashboard |
| quality_gate_idle_all_features | intake_connections | 10/10 | 3 ≤ 4 | bounds checks dashboard |
| quality_gate_idle_all_features | memory_usage | 10/10 | 484.39MiB ≤ 495MiB | bounds checks dashboard |
| quality_gate_logs | intake_connections | 10/10 | 4 ≤ 6 | bounds checks dashboard |
| quality_gate_logs | memory_usage | 10/10 | 176.08MiB ≤ 195MiB | bounds checks dashboard |
| quality_gate_logs | missed_bytes | 10/10 | 0B = 0B | bounds checks dashboard |
| quality_gate_metrics_logs | cpu_usage | 10/10 | 352.43 ≤ 2000 | bounds checks dashboard |
| quality_gate_metrics_logs | intake_connections | 10/10 | 3 ≤ 6 | bounds checks dashboard |
| quality_gate_metrics_logs | memory_usage | 10/10 | 374.42MiB ≤ 430MiB | bounds checks dashboard |
| quality_gate_metrics_logs | missed_bytes | 10/10 | 0B = 0B | bounds checks dashboard |

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

CI Pass/Fail Decision

Passed. All Quality Gates passed.

  • quality_gate_metrics_logs, bounds check missed_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check missed_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.

@davidor davidor changed the title [clusteragent/autoscaling] Defer pod reflector startup until first DPA [clusteragent/autoscaling] Defer autoscaling stack startup until first DPA May 4, 2026
@davidor davidor force-pushed the davidor/contp-1632-autoscaling-lazy-start branch from 2e6d3d3 to 6592584 Compare May 5, 2026 07:11
@davidor davidor marked this pull request as ready for review May 5, 2026 07:49
@davidor davidor requested review from a team as code owners May 5, 2026 07:49
Comment thread pkg/clusteragent/admission/mutate/autoscaling/autoscaling.go Outdated
Comment thread pkg/clusteragent/admission/mutate/autoscaling/autoscaling.go
Comment thread pkg/clusteragent/autoscaling/workload/provider/lazy_start.go
Comment thread pkg/clusteragent/autoscaling/autoscalinggate/gate.go
Comment thread cmd/cluster-agent/subcommands/start/command.go Outdated
Comment thread comp/core/workloadmeta/collectors/internal/kubeapiserver/kubeapiserver.go Outdated
Comment thread comp/core/workloadmeta/collectors/internal/kubeapiserver/kubeapiserver.go Outdated
@davidor davidor force-pushed the davidor/contp-1632-autoscaling-lazy-start branch from 6592584 to 4a2acf7 Compare May 6, 2026 10:20

davidor commented May 6, 2026

@AlexanderYastrebov thanks for the review. I addressed your comments.

gh-worker-dd-mergequeue-cf854d Bot pushed a commit that referenced this pull request May 8, 2026
### What does this PR do?

This PR adds a new workloadmeta catalog specific to the cluster-agent.

This is similar to what already exists for other agents: there's a catalog for dogstatsd, otel, etc.

The reason I'm doing this is that the cluster-agent only needs one collector from the catalog: kubeapiserver. And this collector isn't needed in any other sub-agent. I think having a dedicated catalog makes the code easier to reason about, and avoids pulling in dependencies we don't need (not many in this case).

This change also helps simplify another PR I have open: #50305

I think there's something else we can do about workloadmeta catalogs. There's a "global" catalog, but I think most places that use it could use the "core" catalog instead. I'll leave this for a future PR to avoid introducing too many changes at once.

### Describe how you validated your changes

CI + deployed locally on a kind cluster. I verified that kubeapiserver collector still works in the DCA by checking `agent workload-list`. Also verified that the `agent check` command still works in the DCA (`agent check kubernetes_apiserver`).

Co-authored-by: david.ortiz <david.ortiz@datadoghq.com>
@davidor davidor force-pushed the davidor/contp-1632-autoscaling-lazy-start branch from 4a2acf7 to af2740e Compare May 8, 2026 10:44

davidor commented May 8, 2026

Rebased on top of main to pick up #50482. That PR lets us simplify the code here a bit; I added a new commit: [workloadmeta/kubeapiserver] Make autoscaling gate a required dependency

```go
if _, err := dynamicInformer.ForResource(workload.PodAutoscalerGVR).Informer().AddEventHandler(handlers); err != nil {
	return fmt.Errorf("cannot add gate handler to DatadogPodAutoscaler informer: %w", err)
}
if _, err := dynamicInformer.ForResource(podAutoscalerClusterProfileGVR).Informer().AddEventHandler(handlers); err != nil {
```
Contributor:

This seems tricky to me. Some profiles are created OOTB (when autoscaling is started), which implies two things:

  • If you ever activated autoscaling once, it will stay activated.
  • People cannot use our OOTB profiles unless they have created a DPA or a Profile, while the goal of the OOTB profiles is to let people simply put labels on their workloads. That means the lazy start would also need to check for workloads with those labels.

Member Author:

Thanks. I realized I was not really aware of how OOTB profiles are supposed to work.

I agree. The current approach doesn't work.

I see two possible alternatives:

  • Instead of enabling the autoscaling components when a profile is detected, enable them when we detect one of the supported workloads (deployments, statefulsets, argo) or namespaces labeled with autoscaling.datadoghq.com/profile. We could use a metadata-only informer for this; it would be a matter of replicating the informers already used for this in the WorkloadWatcher. These informers have a cost, but it should be very low compared to an informer for pods in a large cluster.
  • Just gate the pod informer and start the rest of the autoscaling components normally. This gates only what we know is by far the most expensive part, and starts everything else (including OOTB profile reconciliation) normally. It is the simpler solution, but leaves more things running for users who do not use autoscaling.
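The trigger condition in option 1 could look roughly like the following stdlib-only sketch. The `event` type and `shouldOpenGate` name are hypothetical stand-ins; the real implementation would receive notifications from metadata-only client-go informers rather than inspecting a plain struct. The gate opens on the first DPA, or on any supported resource carrying the `autoscaling.datadoghq.com/profile` label.

```go
package main

import "fmt"

// event is a simplified stand-in for an informer add/update notification.
// Hypothetical sketch; not the actual WorkloadWatcher types.
type event struct {
	kind   string            // e.g. "DatadogPodAutoscaler", "Deployment", "Namespace"
	labels map[string]string // object labels from the metadata-only informer
}

const profileLabel = "autoscaling.datadoghq.com/profile"

// shouldOpenGate reports whether an event should start the autoscaling
// stack: either a DPA object, or a workload/namespace with the profile label.
func shouldOpenGate(e event) bool {
	if e.kind == "DatadogPodAutoscaler" {
		return true
	}
	_, labeled := e.labels[profileLabel]
	return labeled
}

func main() {
	events := []event{
		{kind: "Deployment", labels: map[string]string{"app": "web"}},
		{kind: "Namespace", labels: map[string]string{profileLabel: "general-purpose"}},
		{kind: "DatadogPodAutoscaler"},
	}
	for _, e := range events {
		fmt.Printf("%s -> open=%v\n", e.kind, shouldOpenGate(e))
	}
}
```

Once `shouldOpenGate` returns true for any observed object, the gate would be tripped and the full stack (including the pod reflector) started.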

Member Author:

I pushed a new commit that goes with option 1: [clusteragent/autoscaling/workload] Trigger gate on profile-labeled resources


@gabedos gabedos left a comment


Autoscaling gate for DCA pod collection lgtm

chouetz pushed a commit that referenced this pull request May 13, 2026
@davidor davidor force-pushed the davidor/contp-1632-autoscaling-lazy-start branch from af2740e to 998d394 Compare May 15, 2026 10:44
@davidor davidor changed the title [clusteragent/autoscaling] Defer autoscaling stack startup until first DPA [clusteragent/autoscaling] Defer autoscaling stack startup until first DPA or autoscaling workload May 15, 2026
@davidor davidor requested a review from a team as a code owner May 15, 2026 10:56

@JSGette JSGette left a comment


OK for @DataDog/agent-build scope


davidor commented May 15, 2026

I rebased on top of main to fix conflicts. I also added a new commit to address @vboulineau's comment in #50305 (comment), and another commit to fix Bazel.


Labels

changelog/no-changelog: No changelog entry needed
internal: Identify a non-fork PR
long review: PR is complex, plan time to review it
qa/done: QA done before merge and regressions are covered by tests
team/agent-build
team/container-integrations
team/container-platform: The Container Platform Team



5 participants